Author response: ' Deep Automodulators ' NeurIPS: #6295
We thank the reviewers for their insights, and we are glad that the overall response is positive. We address the concerns in the order received. There may be a misunderstanding here: the sole concern of R1 is that our model is an "extension of the idea in ". However, the provided references [1, 2] do not show that BEGAN could do such a thing.
Deep Automodulators
We introduce a new category of generative autoencoders called automodulators. These networks can faithfully reproduce individual real-world input images like regular autoencoders, but can also generate a fused sample from an arbitrary combination of several such images, allowing instantaneous "style-mixing" and other new applications. An automodulator decouples the data flow of the decoder from its layer-wise statistics, and uses the latent vector to modulate the former by the latter, with a principled approach to the mutual disentanglement of decoder layers. Prior work has explored similar decoder architectures with GANs, but with a focus on random sampling; a corresponding autoencoder can operate on real input images.
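The modulation mechanism described above can be sketched in miniature as follows. This is an illustrative toy, not the paper's implementation: we assume an AdaIN-style scheme in which each decoder layer's feature map is normalized (stripping its own statistics) and then re-scaled and shifted by parameters derived from a latent code, so that using different latents at different layers yields style mixing. Function names and the 1-D feature representation are ours.

```python
import math

def modulate(features, scale, shift, eps=1e-8):
    """Normalize a 1-D feature map, then re-inject statistics.

    Normalization removes the layer's own mean and variance; the
    latent-derived (scale, shift) pair supplies new statistics, so the
    latent controls the layer's "style" while the data flow carries content.
    """
    mean = sum(features) / len(features)
    var = sum((f - mean) ** 2 for f in features) / len(features)
    std = math.sqrt(var + eps)
    normalized = [(f - mean) / std for f in features]
    return [scale * n + shift for n in normalized]

def decode(styles_per_layer, features):
    # Style mixing: each decoder layer may draw its (scale, shift)
    # from a different input image's latent code.
    for scale, shift in styles_per_layer:
        features = modulate(features, scale, shift)
    return features
```

After `modulate(x, s, b)`, the output has mean `b` and standard deviation `s` regardless of the input statistics, which is what makes the layers mutually independent in this toy setting.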